social choice mdp


Social Choice with Changing Preferences: Representation Theorems and Long-Run Policies

Kulkarni, Kshitij, Neth, Sven

arXiv.org Artificial Intelligence

We study group decision making with changing preferences as a Markov Decision Process. We are motivated by the increasing prevalence of automated decision-making systems that make choices for groups of people over time. Our main contribution is to show how classic representation theorems from social choice theory can be adapted to characterize optimal policies in this dynamic setting. We provide an axiomatic characterization of MDP reward functions that agree with the utilitarian social welfare functionals of social choice theory. We also discuss cases in which implementing social choice-theoretic axioms may fail to yield long-run optimal outcomes.
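The utilitarian reward functions the abstract refers to sum individual utilities across agents. A minimal sketch of that idea, assuming a state is a profile of per-agent utility functions (the names `utilitarian_reward` and `choose` are illustrative, not from the paper):

```python
# Hedged sketch: a utilitarian reward for a social choice MDP.
# A "profile" here is a tuple of each agent's utility function over
# alternatives; the utilitarian reward of an alternative is the sum of
# the agents' utilities for it.

def utilitarian_reward(profile, alternative):
    """Sum of individual utilities -- the utilitarian social welfare."""
    return sum(u[alternative] for u in profile)

def choose(profile, alternatives):
    """A one-step policy: pick the alternative maximizing utilitarian reward."""
    return max(alternatives, key=lambda a: utilitarian_reward(profile, a))

# Example: three agents with utilities over alternatives 'x' and 'y'.
profile = ({'x': 3, 'y': 1}, {'x': 0, 'y': 2}, {'x': 2, 'y': 1})
print(choose(profile, ['x', 'y']))  # 'x' has welfare 5, 'y' has 4
```

In the paper's dynamic setting this reward would be maximized over a long-run discounted horizon rather than greedily per step; the greedy `choose` is only the one-step special case.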


Dynamic Social Choice with Evolving Preferences

Parkes, David C. (Harvard University) | Procaccia, Ariel D. (Carnegie Mellon University)

AAAI Conferences

Social choice theory provides insights into a variety of collective decision making settings, but nowadays some of its tenets are challenged by internet environments, which call for dynamic decision making under constantly changing preferences. In this paper we model the problem via Markov decision processes (MDPs), where the states of the MDP coincide with preference profiles and a (deterministic, stationary) policy corresponds to a social choice function. We can therefore employ the axioms studied in the social choice literature as guidelines in the design of socially desirable policies. We present tractable algorithms that compute optimal policies under different prominent social choice constraints. Our machinery relies on techniques for exploiting symmetries and isomorphisms between MDPs.
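The identification of states with preference profiles and policies with social choice functions can be made concrete for a tiny instance. A sketch under assumed parameters (2 voters, 3 alternatives; the plurality rule is just one illustrative policy, not the paper's algorithm):

```python
from itertools import permutations, product

# Hedged sketch of the setup: an MDP whose states are preference profiles.
# With n voters and alternative set A, each state is a tuple of strict
# rankings; a deterministic stationary policy maps each profile to a
# winner, i.e. it *is* a social choice function.

A = ('a', 'b', 'c')
n = 2
states = list(product(permutations(A), repeat=n))  # all preference profiles

def plurality(profile):
    """Winner = alternative ranked first by most voters (ties -> earliest name)."""
    counts = {x: 0 for x in A}
    for ranking in profile:
        counts[ranking[0]] += 1
    return max(sorted(A), key=lambda x: counts[x])

# One policy = one social choice function: a table from profiles to winners.
policy = {s: plurality(s) for s in states}
print(len(states))  # (3!)^2 = 36 profiles, so 36 MDP states
```

The state space grows as (m!)^n, which is why the paper's reliance on symmetries and isomorphisms between MDPs matters for tractability.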


A Dynamic Rationalization of Distance Rationalizability

Boutilier, Craig (University of Toronto) | Procaccia, Ariel D. (Carnegie Mellon University)

AAAI Conferences

Distance rationalizability is an intuitive paradigm for developing and studying voting rules: given a notion of consensus and a distance function on preference profiles, a rationalizable voting rule selects an alternative that is closest to being a consensus winner. Despite its appeal, distance rationalizability faces the challenge of connecting the chosen distance measure and consensus notion to an operational measure of social desirability. We tackle this issue via the decision-theoretic framework of dynamic social choice, in which a social choice Markov decision process (MDP) models the dynamics of voter preferences in response to winner selection. We show that, for a prominent class of distance functions, one can construct a social choice MDP, with natural preference dynamics and rewards, such that a voting rule is (votewise) rationalizable with respect to the unanimity consensus for a given distance function iff it is a (deterministic) optimal policy in the MDP. This provides an alternative rationale for distance rationalizability, demonstrating that voting rules that are rationalizable in the static sense coincide with winner selection that maximizes societal utility in a dynamic process.
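The static side of this equivalence can be sketched for one well-known case: under the votewise swap (Kendall tau) distance, moving alternative `a` to the top of a voter's ranking costs `a`'s position in that ranking, so the alternative closest to being a unanimity consensus winner is the Borda winner. A minimal sketch (function names are illustrative):

```python
# Hedged sketch of votewise distance rationalizability with the unanimity
# consensus. Per voter, the minimal number of adjacent swaps that puts
# alternative a on top equals a's index in that voter's ranking, so the
# distance to the nearest profile unanimous on a is the sum of those indices.

def distance_to_unanimity(profile, a):
    """Total adjacent swaps needed so every voter ranks a first."""
    return sum(ranking.index(a) for ranking in profile)

def dr_winner(profile, alternatives):
    """Select the alternative closest to being a unanimity consensus winner."""
    return min(sorted(alternatives),
               key=lambda a: distance_to_unanimity(profile, a))

profile = (('a', 'b', 'c'), ('b', 'a', 'c'), ('b', 'c', 'a'))
print(dr_winner(profile, ('a', 'b', 'c')))  # 'b': distance 1 vs. 3 for 'a'
```

The paper's contribution is the dynamic counterpart: constructing an MDP whose optimal policies are exactly such rationalizable rules, so this static minimization also maximizes long-run societal utility under the constructed dynamics.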